Search Results
Search for: All records
Total Resources: 2
Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g., effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.
Aczel, Balazs; Szaszi, Barnabas; Nilsonne, Gustav; van den Akker, Olmo R; Albers, Casper J; van Assen, Marcel ALM; Bastiaansen, Jojanneke A; Benjamin, Daniel; Boehm, Udo; Botvinik-Nezer, Rotem; et al. (eLife). Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.